28 research outputs found
Recommended from our members
Proceedings of IJCAI International Workshop on Neural-Symbolic Learning and Reasoning NeSy 2005
Deep Logic Networks: Inserting and Extracting Knowledge from Deep Belief Networks
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised fine-tuning. With this layerwise approach, a deep network can be seen as a modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks, improving learning performance when such knowledge is available, and for the extraction of knowledge from trained deep networks, offering a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for representing quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can improve the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and the training of deep networks. Using this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
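The layerwise extraction idea above can be sketched as follows: each hidden unit of an RBM yields one "confidence rule" whose literals are the visible units with large-magnitude weights, signed by the weight's direction. All names and the thresholding scheme here are illustrative, not the authors' code.

```python
# Illustrative sketch of confidence-rule extraction from an RBM weight
# matrix: hidden unit j maps to a rule over visible units whose |weight|
# exceeds a threshold; the rule's confidence accumulates those magnitudes.

def extract_confidence_rules(weights, visible_names, threshold=0.5):
    """weights[j][i] is the weight from visible unit i to hidden unit j."""
    rules = []
    for j, row in enumerate(weights):
        literals = []
        confidence = 0.0
        for i, w in enumerate(row):
            if abs(w) >= threshold:
                # positive weight -> positive literal, negative -> negation
                literals.append(visible_names[i] if w > 0
                                else "NOT " + visible_names[i])
                confidence += abs(w)
        rules.append((confidence, "h%d" % j, literals))
    return rules

W = [[0.9, -0.7, 0.1],   # hypothetical trained weights
     [0.2, 0.8, 0.6]]
rules = extract_confidence_rules(W, ["x1", "x2", "x3"])
```

A real system would also quantise the confidences and use the rules to re-encode (insert) knowledge back into a network's weights.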
Rule Extraction from Support Vector Machines: A Geometric Approach. Technical Report
This paper presents a new approach to rule extraction from Support Vector Machines. SVMs have been applied successfully in many areas with excellent generalization results; rule extraction can offer explanation capability to SVMs. We propose to approximate the SVM classification boundary through querying followed by clustering and searching, and then to extract rules by solving an optimization problem. Theoretical proofs and experimental results indicate that the rules can be used to validate the SVM results, since maximum fidelity with high accuracy can be achieved.
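The query-then-extract idea can be sketched minimally: approximate a classifier's positive region by an axis-aligned hyperrectangle fitted to queried positive points, then measure fidelity as agreement between rule and classifier on further queries. The toy linear "SVM" and the hyperrectangle stand in for the paper's clustering and optimization steps.

```python
# Minimal sketch: fit a hyperrectangle rule to positively-classified query
# points, then score rule/classifier agreement (fidelity) on new queries.

def hyperrectangle_rule(points):
    dims = range(len(points[0]))
    lows = [min(p[d] for p in points) for d in dims]
    highs = [max(p[d] for p in points) for d in dims]
    return lows, highs

def rule_covers(rule, p):
    lows, highs = rule
    return all(lo <= x <= hi for lo, x, hi in zip(lows, p, highs))

def fidelity(rule, classifier, queries):
    agree = sum(rule_covers(rule, q) == classifier(q) for q in queries)
    return agree / len(queries)

classifier = lambda p: p[0] + p[1] > 1.0          # toy decision boundary
positives = [(0.8, 0.9), (0.6, 0.7), (1.0, 0.5)]  # queried positive points
rule = hyperrectangle_rule(positives)
queries = [(0.7, 0.8), (0.1, 0.1), (0.9, 0.6), (2.0, 2.0)]
score = fidelity(rule, classifier, queries)
```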
Applied Temporal Rule Mining to Time Series
Association rule mining from time series has attracted considerable interest in recent years, and various methods have been developed. Temporal rules between discovered episodes provide useful knowledge about the dynamics of the problem domain and the underlying data-generating process. However, temporal rule mining itself has received little attention. In addition, existing methods suffer from two significant drawbacks. First, the rules they produce are not robust enough with respect to noise. Second, the methods are highly dependent on the choice of parameters, since small perturbations of the parameters lead to significantly different results. In this paper we propose a framework to derive temporal rules from time series. Our approach is based on episode rule mining that discovers temporal rules from time series in the frequency domain using the discrete cosine transform. The rules are then translated to temporal relations between time series patterns of arbitrary length. Experimental results of the proposed framework are presented.
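The frequency-domain step can be illustrated with a plain DCT-II over a window of the series, keeping only the dominant coefficients that would feed a downstream episode-mining stage. The window, `keep` parameter, and naming are illustrative choices, not the paper's configuration.

```python
import math

# Toy sketch: DCT-II of a time-series window, then keep the indices of
# the largest-magnitude coefficients as a compact frequency signature.

def dct2(x):
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n))
            for k in range(n)]

def dominant(coeffs, keep=2):
    order = sorted(range(len(coeffs)), key=lambda k: -abs(coeffs[k]))
    return sorted(order[:keep])

window = [1.0, 1.0, 1.0, 1.0]   # a flat pattern: energy at k = 0 only
coeffs = dct2(window)
```

A robustness-to-noise argument follows naturally: small perturbations of the window perturb each coefficient only slightly, so the dominant-index signature is stable.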
Neural-Symbolic Monitoring and Adaptation
Runtime monitors check the execution of a system under scrutiny against a set of formal specifications describing a prescribed behaviour. The two core properties for monitoring systems are scalability and adaptability. In this paper we show how RuleRunner, our previous neural-symbolic monitoring system, can exploit learning strategies in order to integrate desired deviations with the initial set of specifications. The resulting system allows for fast conformance checking and can suggest possible enhanced models when the initial set of specifications has to be adapted in order to include new patterns.
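The conformance-checking side can be sketched as follows: each specification is a predicate over the trace seen so far, and the monitor reports a violation as soon as any specification fails. This is a generic runtime-monitor skeleton with hypothetical names, not RuleRunner's neural encoding.

```python
# Minimal runtime-monitor sketch: replay a trace event by event and check
# every specification against the prefix observed so far.

def monitor(trace, specs):
    seen = []
    for event in trace:
        seen.append(event)
        for name, spec in specs.items():
            if not spec(seen):
                return ("violation", name, len(seen))   # earliest failure
    return ("conforms", None, len(seen))

# Hypothetical specification: "close" may only occur after "open".
specs = {
    "close_only_after_open":
        lambda t: "close" not in t or "open" in t[:t.index("close")],
}

ok = monitor(["open", "write", "close"], specs)
bad = monitor(["write", "close"], specs)
```

Adaptation, in the paper's sense, would amount to learning a revised specification set when a flagged deviation is actually a desired new pattern.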
Logic tensor networks for semantic image interpretation
Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
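The fuzzy-logic side of LTNs can be illustrated without the neural part: predicates return truth degrees in [0, 1], and connectives become differentiable real-valued operations. The product t-norm and Reichenbach implication below are one standard choice of fuzzy semantics; the predicate values and the part-of constraint are made-up examples.

```python
# Sketch of differentiable fuzzy connectives over predicate truth degrees,
# as used to score logical constraints against (here, hand-set) predictions.

def t_and(a, b):
    return a * b                 # product t-norm for conjunction

def t_implies(a, b):
    return 1.0 - a + a * b       # Reichenbach implication

# Illustrative constraint: partOf(wheel, x) AND car(x) -> vehicle(x),
# with truth degrees standing in for neural predicate outputs.
part_of = 0.9
is_car = 0.8
is_vehicle = 0.95
truth = t_implies(t_and(part_of, is_car), is_vehicle)
```

Because every operation is differentiable, the degree to which a constraint holds can be maximised by gradient descent alongside the data-fitting loss.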
Proceedings of ECAI International Workshop on Neural-Symbolic Learning and Reasoning NeSy 2006
Measurable counterfactual local explanations for any classifier
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR (Counterfactual Local Explanations via Regression) is introduced and evaluated. CLEAR generates b-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the b-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method [17], which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 40% higher in this paper's five case studies.
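The counterfactual idea can be sketched on a toy linear classifier: step one feature toward the decision boundary until the predicted class flips, and report the size of the change. This is a deliberately naive one-feature search with made-up parameters, not CLEAR's b-counterfactual procedure.

```python
# Toy counterfactual search: perturb one feature of a linear classifier's
# input in small steps until the predicted class flips.

def predict(x, w=(1.0, 1.0), b=-1.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def counterfactual(x, feature=0, step=0.05, max_steps=200):
    original = predict(x)
    x = list(x)
    for i in range(1, max_steps + 1):
        x[feature] += step
        if predict(x) != original:
            return x, i * step      # counterfactual and change needed
    return None, None

x0 = (0.3, 0.4)                     # predicted negative: 0.3 + 0.4 - 1 < 0
cf, delta = counterfactual(x0)
```

In CLEAR's terms, the distance from `x0` to `cf` is what a faithful local regression model should be able to reproduce, which is how fidelity is measured.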
Neural-symbolic integration for fairness in AI
Deep learning has achieved state-of-the-art results in various application domains ranging from image recognition to language translation and game playing. However, it is now generally accepted that deep learning alone has not been able to satisfy the requirement of fairness and, ultimately, trust in Artificial Intelligence (AI). In this paper, we propose an interactive neural-symbolic approach for fairness in AI based on the Logic Tensor Network (LTN) framework. We show that the extraction of symbolic knowledge from LTN-based deep networks combined with fairness constraints offers a general method for instilling fairness into deep networks via continual learning. Explainable AI approaches, which otherwise could identify but not fix fairness issues, are shown to be enriched with an ability to improve fairness results. Experimental results on three real-world data sets used to predict income, credit risk and recidivism in financial applications show that our approach can satisfy fairness metrics while maintaining state-of-the-art classification performance.
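A fairness metric of the kind such constraints target can be computed directly. The sketch below uses demographic parity, the gap between positive-prediction rates across a protected attribute; whether this is the exact metric used in the paper is an assumption, and the data is made up.

```python
# Demographic parity sketch: compare positive-prediction rates across
# groups of a protected attribute; a gap of 0 means parity.

def positive_rate(preds, groups, group):
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(preds[i] for i in idx) / len(idx)

def demographic_parity_gap(preds, groups):
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 0]             # hypothetical model outputs
groups = ["a", "a", "a", "b", "b", "b"] # protected attribute per example
gap = demographic_parity_gap(preds, groups)   # |2/3 - 1/3|
```

In a neural-symbolic setting, a bound such as `gap <= epsilon` can be expressed as a logical constraint and pushed into training rather than merely reported.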
Neural-Symbolic Reasoning Under Open-World and Closed-World Assumptions
Neural-Symbolic approaches are becoming increasingly prominent due to their ability to integrate knowledge and data. In this paper, we propose the iterative use of a neurosymbolic approach and evaluate its reasoning capability. We deploy the Logic Tensor Networks neurosymbolic approach iteratively and compare its reasoning capability with purely symbolic reasoning under closed-world and open-world assumptions. Reasoning capability is evaluated on two data sets, a family relationship task and a typical ontology reasoning data set. The use of an iterative neurosymbolic approach improves reasoning from an F1 score of 0.64 to 0.97 in one case, and from 0.60 to 0.88 in the other, which is higher than what was reported previously in the literature. Our results also show that an open-world neurosymbolic approach based on differentiable fuzzy logic can excel at recall, while a logical reasoner under a closed-world assumption can achieve high precision when the domain is under-specified.
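The contrast between the two assumptions can be made concrete in a few lines: a closed-world reasoner treats anything not derivable as false, while an open-world one leaves it unknown, which is why their precision/recall profiles differ on under-specified domains. The fact representation below is an illustrative simplification.

```python
# Closed-world vs open-world query answering over a tiny fact base.

def closed_world(query, known_facts):
    # Not derivable implies false: high precision, risks low recall.
    return query in known_facts

def open_world(query, known_facts, known_negations):
    # Absence of evidence is "unknown", not "false".
    if query in known_facts:
        return True
    if query in known_negations:
        return False
    return None

facts = {("parent", "ann", "bob")}
negs = set()
q = ("parent", "ann", "carol")          # never stated either way
```

Under the closed-world assumption `q` is simply false; under the open-world assumption it stays unknown, leaving room for a learned (fuzzy) model to recall it if the data supports it.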